Search for: All records where Creators/Authors contains: "Nguyen, Phuc"

  1. In this paper, we present Jawthenticate, an earable system that authenticates a user using audible or inaudible speech without using a microphone. This system can overcome the shortcomings of traditional voice-based authentication systems, such as unreliability in noisy conditions and spoofing via microphone-based replay attacks. Jawthenticate derives distinctive speech-related features from the jaw motion and associated facial vibrations. This combination of features makes Jawthenticate resilient to vocal imitations as well as camera-based spoofing. We use these features to train a two-class SVM classifier for each user. Our system is invariant to the content and language of speech. In a study conducted with 41 subjects who speak different native languages, Jawthenticate achieves a Balanced Accuracy (BAC) of 97.07%, True Positive Rate (TPR) of 97.75%, and True Negative Rate (TNR) of 96.4% with just 3 seconds of speech data.
    Free, publicly-accessible full text available November 13, 2024
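    A minimal sketch of the per-user classification step described above, assuming pre-computed feature vectors from jaw-motion and facial-vibration windows. The feature extraction, scaling, RBF kernel, and acceptance threshold are illustrative assumptions; the abstract only states that a two-class SVM is trained for each user.

      # Sketch of a per-user two-class SVM, as described in the Jawthenticate abstract.
      # Feature extraction and all hyperparameters below are assumptions for illustration.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def train_user_model(user_feats: np.ndarray, imposter_feats: np.ndarray):
          """Train one two-class SVM for a single enrolled user.

          user_feats     -- (n_user, d) features from the enrolled user's speech windows
          imposter_feats -- (n_imp, d) features from other speakers / spoofing attempts
          """
          X = np.vstack([user_feats, imposter_feats])
          y = np.concatenate([np.ones(len(user_feats)), np.zeros(len(imposter_feats))])
          model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
          model.fit(X, y)
          return model

      def authenticate(model, window_feats: np.ndarray, threshold: float = 0.5) -> bool:
          """Accept if the mean genuine-user posterior over a ~3 s window exceeds the threshold."""
          scores = model.predict_proba(window_feats)[:, 1]
          return bool(scores.mean() >= threshold)

    In practice the acceptance threshold would presumably be tuned per user to trade off the TPR and TNR figures reported in the abstract.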
  2. Free, publicly-accessible full text available September 1, 2024
  3. Server-level power monitoring in data centers can contribute significantly to their efficient management. Nevertheless, because a dedicated power meter for each server is costly, most data-center power management focuses only on UPS- or cluster-level power monitoring. In this paper, we propose a novel, low-cost power monitoring approach that uses only one sensor to extract the power consumption of all servers. We exploit the conducted electromagnetic interference of server power supplies to measure each server's power consumption from a non-intrusive, single-point voltage measurement. Using a pair of commercial-grade Dell PowerEdge servers, we demonstrate that our approach can estimate each server's power consumption with ~3% mean absolute percentage error.
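    The estimation pipeline itself is not spelled out in the abstract; the sketch below assumes the conducted-EMI signature is captured as band energies of the line-voltage spectrum and mapped to per-server power with a calibrated linear regressor. The band choices, regression model, and calibration procedure are assumptions for illustration.

      # Sketch of single-point, EMI-based per-server power estimation.
      # The spectral bands and ridge regression are illustrative assumptions,
      # not the paper's actual estimation model.
      import numpy as np
      from sklearn.linear_model import Ridge

      def emi_features(voltage_trace, fs, bands):
          """Energies of selected high-frequency bands in the line-voltage spectrum."""
          spectrum = np.abs(np.fft.rfft(voltage_trace)) ** 2
          freqs = np.fft.rfftfreq(len(voltage_trace), d=1.0 / fs)
          return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

      def fit_server_model(traces, metered_power, fs, bands):
          """Calibrate one regressor for one server against ground-truth meter readings."""
          X = np.stack([emi_features(t, fs, bands) for t in traces])
          model = Ridge(alpha=1.0)
          model.fit(X, metered_power)
          return model

      def estimate_power(model, voltage_trace, fs, bands):
          """Estimate a server's power draw from a new single-point voltage capture."""
          return float(model.predict(emi_features(voltage_trace, fs, bands)[None, :])[0])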
  4. To study the chaotic behavior of a system with non-local interactions, we consider weakly coupled non-commutative field theories. We compute the Lyapunov exponent governing this exponential growth of chaos (the standard diagnostic is recalled below) in the large Moyal-scale limit, to leading order in the 't Hooft coupling and 1/N. We find that in this limit the Lyapunov exponent remains comparable in magnitude to (and somewhat smaller than) the exponent in the commutative case. This can possibly be explained by the infrared sensitivity of the Lyapunov exponent. Another possible explanation is that in examples of weakly coupled non-commutative field theories, non-local contributions to various thermodynamic quantities are sub-dominant.
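    For context, the Lyapunov exponent in this setting is conventionally read off from the exponential growth of a thermal squared commutator (an out-of-time-order correlator). The expression below is the standard large-N convention together with the chaos bound; it is background, not a formula quoted from the abstract.

      % Standard large-N diagnostic of many-body chaos (background, not from the paper).
      % W, V are generic Hermitian operators; the thermal average is at temperature T,
      % with k_B = \hbar = 1.
      C(t) = -\big\langle\, [W(t), V(0)]^{2} \,\big\rangle_{\beta}
           \;\sim\; \frac{1}{N^{2}}\, e^{\lambda_{L} t},
      \qquad \lambda_{L} \le 2\pi T .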
  5. In this paper, we present MuteIt, an ear-worn system for recognizing unvoiced human commands. MuteIt presents an intuitive alternative to voice-based interactions, which can be unreliable in noisy environments, disruptive to those around us, and a risk to our privacy. We propose a twin-IMU setup to track the user's jaw motion and cancel motion artifacts caused by head and body movements. MuteIt processes jaw motion during word articulation to break each word signal into its constituent syllables, and further each syllable into phonemes (vowels, visemes, and plosives). Recognizing unvoiced commands by tracking jaw motion alone is challenging: as a secondary articulator, the jaw does not move distinctively enough for unvoiced speech recognition. MuteIt therefore combines IMU data with the anatomy of jaw movement and principles from linguistics to model word recognition as an estimation problem. Rather than employing machine learning to train a word classifier, we reconstruct each word as a sequence of phonemes using a bi-directional particle filter, enabling the system to scale easily to a large set of words. We validate MuteIt with 20 subjects with diverse speech accents recognizing 100 common command words. MuteIt achieves a mean word recognition accuracy of 94.8% in noise-free conditions. Compared with common voice assistants, MuteIt outperforms them in noisy acoustic environments, achieving higher than 90% recognition accuracy. Even in the presence of motion artifacts, such as head movement, walking, and riding in a moving vehicle, MuteIt achieves a mean word recognition accuracy of 91% across all scenarios.
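    The abstract describes reconstructing each word as a phoneme sequence with a bi-directional particle filter constrained by jaw anatomy and linguistics. The toy sketch below shows only a generic forward particle-filter pass over phoneme hypotheses; the phoneme set, transition model, and likelihood are hypothetical placeholders, and the bi-directional smoothing is omitted.

      # Toy forward particle filter over phoneme sequences, sketching MuteIt's
      # word-reconstruction idea. Transition and likelihood models are hypothetical
      # placeholders; the real system is bi-directional and uses anatomical and
      # linguistic constraints not modeled here.
      import numpy as np

      PHONEMES = ["a", "e", "i", "o", "u", "b", "p", "m"]  # illustrative subset

      def transition_probs(prev):
          """Hypothetical phoneme transition prior (uniform here)."""
          return np.full(len(PHONEMES), 1.0 / len(PHONEMES))

      def likelihood(segment_feats, phoneme):
          """Hypothetical likelihood of an IMU segment given a phoneme template."""
          template = float(PHONEMES.index(phoneme))  # stand-in per-phoneme template
          return float(np.exp(-0.5 * (segment_feats.mean() - template) ** 2)) + 1e-9

      def decode_word(segments, n_particles=200, seed=0):
          """Reconstruct a word as the phoneme sequence of the highest-weight particle."""
          rng = np.random.default_rng(seed)
          particles = [[] for _ in range(n_particles)]
          log_w = np.zeros(n_particles)
          for seg in segments:  # one IMU feature array per syllable/phoneme segment
              for i, seq in enumerate(particles):
                  probs = transition_probs(seq[-1] if seq else None)
                  ph = PHONEMES[rng.choice(len(PHONEMES), p=probs)]
                  seq.append(ph)
                  log_w[i] += np.log(likelihood(seg, ph))
              w = np.exp(log_w - log_w.max())
              w /= w.sum()
              if 1.0 / np.sum(w ** 2) < n_particles / 2:  # resample on low effective sample size
                  idx = rng.choice(n_particles, size=n_particles, p=w)
                  particles = [list(particles[j]) for j in idx]
                  log_w = np.zeros(n_particles)
          w = np.exp(log_w - log_w.max())
          return "".join(particles[int(np.argmax(w))])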